
    STANCE: Locomotion Adaptation over Soft Terrain

    Whole-Body Control (WBC) has emerged as an important framework for locomotion control of legged robots. However, most WBC frameworks fail to generalize beyond rigid terrain. Legged locomotion over soft terrain is difficult due to unmodeled contact dynamics that WBCs do not account for. This introduces uncertainty into the locomotion and affects the stability and performance of the system. In this paper, we propose a novel soft terrain adaptation algorithm called STANCE: Soft Terrain Adaptation and Compliance Estimation. STANCE consists of a WBC that exploits knowledge of the terrain to generate an optimal, contact-consistent solution, and an online terrain compliance estimator that provides the WBC with that terrain knowledge. We validated STANCE both in simulation and in experiments on the Hydraulically actuated Quadruped (HyQ) robot, and compared it against a state-of-the-art WBC. We demonstrated the capabilities of STANCE over multiple terrains of different compliances, during aggressive maneuvers, at different forward velocities, and under external disturbances. STANCE allowed HyQ to adapt online to terrains of different compliances (rigid and soft) without pre-tuning. HyQ successfully dealt with transitions between different terrains and showed the ability to differentiate between the compliances under each foot.
    Comment: 12 pages, 11 figures
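The core idea of online compliance estimation can be illustrated with a minimal sketch. This is not the paper's formulation; it simply fits a linear spring-damper contact model F = K·p + D·v to recent foot force and penetration samples via least squares, with synthetic data standing in for real sensor measurements.

```python
import numpy as np

def estimate_compliance(penetration, velocity, force):
    """Fit stiffness K and damping D of a spring-damper contact model
    F = K*p + D*v to sampled contact data via least squares."""
    A = np.column_stack([penetration, velocity])  # regressor matrix
    x, *_ = np.linalg.lstsq(A, force, rcond=None)
    K, D = x
    return K, D

# Synthetic soft-terrain data: true K = 5000 N/m, D = 200 Ns/m, plus noise
rng = np.random.default_rng(0)
p = rng.uniform(0.0, 0.02, 100)        # penetration depth [m]
v = rng.uniform(-0.1, 0.1, 100)        # penetration rate [m/s]
F = 5000.0 * p + 200.0 * v + rng.normal(0.0, 1.0, 100)  # contact force [N]
K, D = estimate_compliance(p, v, F)
```

Feeding the estimated impedance back to the controller, as STANCE does, closes the adaptation loop: the contact model used by the WBC tracks the terrain instead of being fixed a priori.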

    On Terrain-Aware Locomotion for Legged Robots

    No full text
    Legged robots are advancing towards full autonomy, as can be seen from recent developments in academia and industry. To accomplish breakthroughs in dynamic whole-body locomotion, and to traverse unexplored complex environments robustly, legged robots have to be terrain aware. Terrain-Aware Locomotion (TAL) implies that the robot can perceive the terrain with its sensors and make decisions based on this information. These decisions can be made in planning, control, or state estimation, and the terrain may vary in its geometry or its physical properties. TAL can be categorized into Proprioceptive Terrain-Aware Locomotion (PTAL), which relies on internal robot measurements to negotiate the terrain, and Exteroceptive Terrain-Aware Locomotion (ETAL), which relies on the robot's vision to perceive the terrain. This thesis presents TAL strategies from both a proprioceptive and an exteroceptive perspective. The strategies are implemented at the level of locomotion planning, control, and state estimation, and use optimization and learning techniques.

    The first part of this thesis focuses on PTAL strategies that help the robot adapt to the terrain geometry and properties. At the Whole-Body Control (WBC) level, achieving dynamic TAL requires reasoning about the robot dynamics, actuation and kinematic limits, and the terrain interaction. For that, we introduce a Passive Whole-Body Control (pWBC) framework that allows the robot to stabilize and walk over challenging terrain while taking into account the terrain geometry (inclination) and friction properties. The pWBC relies on rigid contact assumptions, which makes it suitable only for stiff terrain. As a consequence, we introduce Soft Terrain Adaptation aNd Compliance Estimation (STANCE), a soft terrain adaptation algorithm that generalizes beyond rigid terrain. STANCE consists of a Compliant Contact Consistent Whole-Body Control (c3WBC) that adapts the locomotion strategies based on the terrain impedance, and an online Terrain Compliance Estimator (TCE) that senses and learns the terrain impedance properties and provides them to the c3WBC. Additionally, we demonstrate the effects of terrains with different impedances on state estimation for legged robots.

    The second part of the thesis focuses on ETAL strategies that make the robot aware of the terrain geometry using visual (exteroceptive) information. To do so, we present Vision-Based Terrain-Aware Locomotion (ViTAL), a locomotion planning strategy. ViTAL consists of a Vision-Based Pose Adaptation (VPA) algorithm that plans the robot's body pose, and a Vision-Based Foothold Adaptation (VFA) algorithm that selects the robot's footholds. The VFA extends the state of the art in foothold selection planning strategies. Most importantly, the VPA algorithm introduces a different paradigm for vision-based pose adaptation. ViTAL relies on a set of robot skills that characterize the capabilities of the robot and its legs. These skills are learned via self-supervised learning using Convolutional Neural Networks (CNNs). They include (but are not limited to) the robot's ability to assess the terrain's geometry, avoid leg collisions, and avoid reaching kinematic limits. As a result, we contribute an online vision-based locomotion planning strategy that selects footholds based on the robot's capabilities, and a robot pose that maximizes the chances of the robot reaching these footholds. Our strategies are extensively validated on the quadruped robots HyQ and HyQReal in simulations and experiments. We show that, with the help of these strategies, we can push dynamic legged robots one step closer towards being fully autonomous and terrain aware.
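The foothold-selection idea behind the VFA can be sketched at a high level. This is an illustrative toy, not the thesis's implementation: each learned "skill" is assumed to produce a score map over candidate footholds in a terrain patch, and the foothold maximizing the combined score is selected.

```python
import numpy as np

def select_foothold(skill_maps):
    """Pick the (row, col) foothold maximizing the product of skill scores.
    Each map scores candidates in [0, 1]; a foothold must satisfy every
    skill (geometry, collision avoidance, kinematic limits, ...)."""
    combined = np.ones_like(skill_maps[0])
    for m in skill_maps:
        combined = combined * m
    return np.unravel_index(np.argmax(combined), combined.shape)

# Toy 2x2 patch with two hypothetical skill maps
geometry  = np.array([[0.9, 0.1], [0.5, 0.8]])
collision = np.array([[0.2, 0.9], [0.9, 0.7]])
foothold = select_foothold([geometry, collision])
```

In the thesis the score maps come from CNNs trained in a self-supervised fashion; here they are hand-written constants purely to show the combination step.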

    Inertial Properties in Haptic Devices: Non-Linear Inertia Shaping vs. Force Feedforward

    Get PDF
    The inertia of haptic devices limits the user's manipulation and dynamically couples the Cartesian motion, which degrades the transparency and fidelity of haptic feedback. By employing force-torque sensing, we investigate two approaches to reduce the apparent inertial effect of haptic devices and to overcome dynamic coupling. First, in order to shape the apparent inertia felt by the user, non-linear inertia shaping (NIS) is presented and introduced to the field of haptics. NIS is based on non-linear dynamic decoupling (NLD). Second, as a standard approach, force feedforward control (FF) is presented, which uniformly scales down the apparent inertia. We demonstrate that FF is a special case of NIS under the assumption that the gravitational, centripetal, and Coriolis terms are neglected. Simulations and experiments were conducted on DLR's bi-manual haptic device HUG. It is shown that NIS is suited to compensate for the coupling effects, while FF can reduce the apparent inertia more effectively.
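The uniform scaling that FF performs is easiest to see in one dimension. The following is a minimal 1-DOF sketch with illustrative names and values, not the paper's multi-DOF formulation: the measured user force is amplified by the motor so the device responds as if it had a smaller apparent mass.

```python
def ff_command(f_user, m_real, m_apparent):
    """Motor force that scales the apparent inertia from m_real down to
    m_apparent: m_real * a = f_user + f_motor = f_user * m_real / m_apparent,
    so the user feels acceleration a = f_user / m_apparent."""
    return (m_real / m_apparent - 1.0) * f_user

# A 4 kg device made to feel like 1 kg: a 2 N user force should
# produce the acceleration of a 1 kg mass, i.e. 2 m/s^2.
m_real, m_apparent = 4.0, 1.0
f_user = 2.0
a = (f_user + ff_command(f_user, m_real, m_apparent)) / m_real
```

NIS generalizes this by shaping the full configuration-dependent inertia matrix rather than applying one uniform scale factor, which is why it can also remove the Cartesian coupling.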

    Reinforcement Learning for Legged Robots: Motion Imitation from Model-Based Optimal Control

    Full text link
    We propose MIMOC: Motion Imitation from Model-Based Optimal Control. MIMOC is a Reinforcement Learning (RL) controller that learns agile locomotion by imitating reference trajectories from model-based optimal control. MIMOC mitigates challenges faced by other motion imitation RL approaches because the references are dynamically consistent, require no motion retargeting, and include torque references. Hence, MIMOC does not require fine-tuning. MIMOC is also less sensitive to modeling and state estimation inaccuracies than model-based controllers. We validate MIMOC on the Mini-Cheetah in outdoor environments over a wide variety of challenging terrain, and on the MIT Humanoid in simulation. We show cases where MIMOC outperforms model-based optimal controllers, and show that imitating torque references improves the policy's performance.
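A common way to reward trajectory imitation, sketched here with illustrative weights and dimensions (not MIMOC's actual reward), is to combine exponential tracking terms for the reference joint positions and, as the abstract highlights, the reference torques:

```python
import numpy as np

def imitation_reward(q, q_ref, tau, tau_ref, w_q=5.0, w_tau=0.05):
    """Reward in (0, 1] that peaks when the policy matches both the
    reference joint positions and the reference torques."""
    r_q = np.exp(-w_q * np.sum((q - q_ref) ** 2))       # position tracking
    r_tau = np.exp(-w_tau * np.sum((tau - tau_ref) ** 2))  # torque tracking
    return 0.5 * r_q + 0.5 * r_tau

# Perfect tracking of a 12-joint reference yields the maximum reward of 1.0
r = imitation_reward(np.zeros(12), np.zeros(12), np.zeros(12), np.zeros(12))
```

Because the references come from model-based optimal control, they are dynamically consistent by construction, so the torque term is well defined and needs no retargeting step.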

    Passive Whole-body Control for Quadruped Robots: Experimental Validation over Challenging Terrain

    Get PDF
    We present experimental results using a passive whole-body control approach for quadruped robots that achieves dynamic locomotion while compliantly balancing the robot's trunk. We formulate the motion tracking as a Quadratic Program (QP) that takes into account the full robot rigid body dynamics, the actuation limits, the joint limits, and the contact interaction. We analyze the controller's robustness against inaccurate friction coefficient estimates and unstable footholds, as well as its capability to redistribute the load as a consequence of enforcing actuation limits. Additionally, we present practical implementation details gained from experience with the real platform. Extensive experimental trials on the 90 kg Hydraulically actuated Quadruped (HyQ) robot validate the capabilities of this controller under various terrain conditions and gaits. The proposed approach outperforms the current state of the art in the accurate execution of highly dynamic motions.
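The QP structure behind such controllers can be sketched in miniature. This toy is not the paper's formulation: it solves only the equality-constrained core, min (x - x_des)ᵀW(x - x_des) subject to Ax = b, via the KKT system; the full WBC additionally imposes actuation limits, joint limits, and friction cones as inequality constraints and therefore needs a proper QP solver.

```python
import numpy as np

def solve_eq_qp(W, x_des, A, b):
    """Minimize (x - x_des)' W (x - x_des) subject to A x = b by solving
    the KKT system [[2W, A'], [A, 0]] [x; lam] = [2W x_des; b]."""
    n, m = W.shape[0], A.shape[0]
    K = np.block([[2.0 * W, A.T],
                  [A, np.zeros((m, m))]])
    rhs = np.concatenate([2.0 * W @ x_des, b])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]  # primal solution (drop the multipliers)

# Toy: track the desired point [1, 1] while the constraint x0 + x1 = 1
# (standing in for, e.g., a contact constraint) forces a compromise.
W = np.eye(2)
x = solve_eq_qp(W, np.array([1.0, 1.0]), np.array([[1.0, 1.0]]), np.array([1.0]))
```

The same pattern scales up: in the paper's setting the decision variables are accelerations and contact forces, the equality constraints encode the rigid-body dynamics and contact conditions, and the weights trade off the tracking objectives.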

    On Slip Detection for Quadruped Robots

    No full text
    Legged robots are meant to autonomously navigate unstructured environments for applications like search and rescue, inspection, or maintenance. In autonomous navigation, a close relationship between locomotion and perception is crucial; the robot has to perceive the environment and detect any change in order to autonomously make decisions based on what it perceived. One main challenge in autonomous navigation for legged robots is locomotion over unstructured terrains. In particular, when the ground is slippery, common control techniques and state estimation algorithms may not be effective, because the ground is commonly assumed to be non-slippery. This paper addresses the problem of slip detection, a first fundamental step toward implementing appropriate control strategies and performing dynamic whole-body locomotion. We propose a slip detection approach that is independent of the gait type and of the estimation of the robot's position and velocity in an inertial frame, which is usually prone to drift. To the best of our knowledge, this is the first slip detector for quadruped robots that can detect slippage of more than one foot at the same time, relying on measurements expressed in a non-inertial frame. We validate the approach on the 90 kg Hydraulically actuated Quadruped robot (HyQ) from the Istituto Italiano di Tecnologia (IIT), and we compare it against a state-of-the-art slip detection algorithm.
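The underlying contact-consistency check can be illustrated with a simplified per-foot test. This is a hypothetical sketch, not the paper's detector: under a rigid, non-slipping contact, a stance foot's velocity expressed in the base frame should equal -(v + ω × r), where (v, ω) is the base twist and r the foot position in the base frame, so a large deviation flags slippage.

```python
import numpy as np

def detect_slip(foot_vel_base, base_twist, foot_pos, thresh=0.05):
    """Flag a stance foot as slipping when its measured velocity in the
    base frame deviates from the rigid-contact prediction -(v + w x r)
    by more than thresh [m/s]."""
    v, w = base_twist
    expected = -(v + np.cross(w, foot_pos))
    return bool(np.linalg.norm(foot_vel_base - expected) > thresh)

# Base translating at 0.1 m/s forward, no rotation
v, w = np.array([0.1, 0.0, 0.0]), np.zeros(3)
r = np.array([0.3, 0.2, -0.5])                    # foot position in base frame
stuck = detect_slip(np.array([-0.1, 0.0, 0.0]), (v, w), r)   # consistent contact
slipping = detect_slip(np.array([0.05, 0.0, 0.0]), (v, w), r)  # 0.15 m/s mismatch
```

All quantities here live in the base (non-inertial) frame, which mirrors the abstract's point that the method avoids drift-prone inertial-frame state estimates; the threshold value is illustrative.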